15 research outputs found

    An evaluation of local action descriptors for human action classification in the presence of occlusion

    This paper examines the impact that the choice of local descriptor has on human action classifier performance in the presence of static occlusion. This question is important when applying human action classification to surveillance video that is noisy, crowded, complex and incomplete. In real-world scenarios, a human can naturally be occluded by an object while carrying out different actions. However, it is unclear how the performance of the proposed action descriptors is affected by the associated loss of information. In this paper, we evaluate and compare the classification performance of state-of-the-art local human action descriptors in the presence of varying degrees of static occlusion. We consider four different local action descriptors: Trajectory (TRAJ), Histogram of Oriented Gradients (HOG), Histogram of Optical Flow (HOF) and Motion Boundary Histogram (MBH). These descriptors are combined with a standard bag-of-features representation and a Support Vector Machine classifier for action recognition. We investigate the performance of these descriptors and their possible combinations with respect to varying amounts of artificial occlusion in the KTH action dataset. This preliminary investigation shows that MBH in combination with TRAJ performs best under partial occlusion, while TRAJ in combination with MBH achieves the best results under heavy occlusion.
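    The abstract does not give the masking procedure, so the following is a minimal sketch of one plausible way to impose artificial static occlusion on frames before descriptor extraction. The `occlude` helper, the vertical-band shape and the width fractions are illustrative assumptions, not the paper's actual protocol.

```python
import numpy as np

def occlude(frame, fraction, value=0):
    """Blank a static vertical band covering `fraction` of the frame width,
    simulating a static occluder (hypothetical helper, not the paper's code)."""
    h, w = frame.shape[:2]
    band = int(round(w * fraction))
    out = frame.copy()
    out[:, :band] = value  # same region in every frame => static occlusion
    return out

# Toy 120x160 grayscale frame at mid-grey.
frame = np.full((120, 160), 128, dtype=np.uint8)
partial = occlude(frame, 0.25)  # 25% of the width occluded ("partial")
heavy = occlude(frame, 0.75)    # 75% of the width occluded ("heavy")
```

    Applying the same mask to every frame of a clip keeps the occlusion static, so any drop in classifier accuracy can be attributed to the lost region rather than to motion of the occluder.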

    Action recognition based on sparse motion trajectories

    We present a method that extracts effective features from videos for human action recognition. The proposed method analyses the 3D volumes along the sparse motion trajectories of a set of interest points in the video scene. To represent human actions, we build a Bag-of-Features (BoF) model from the extracted features, and finally a support vector machine is used to classify human activities. Evaluation shows that the proposed features are discriminative and computationally efficient. Our method achieves state-of-the-art performance on the standard human action recognition benchmarks, namely the KTH and Weizmann datasets.
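    The Bag-of-Features step in this pipeline quantizes each local descriptor against a learned codebook and counts codeword occurrences. A hedged sketch, assuming Euclidean nearest-codeword assignment and an L1-normalised histogram (the `bof_histogram` helper and the toy codebook are illustrative, not the authors' code):

```python
import numpy as np

def bof_histogram(descriptors, codebook):
    """Quantize each local descriptor to its nearest codeword (Euclidean)
    and return an L1-normalised bag-of-features histogram."""
    # Pairwise squared distances, shape (n_descriptors, n_codewords).
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy example: four 2-D descriptors, a 2-word codebook.
codebook = np.array([[0.0, 0.0], [10.0, 10.0]])
desc = np.array([[0.1, 0.2], [9.8, 10.1], [0.0, 0.3], [10.2, 9.9]])
print(bof_histogram(desc, codebook))  # -> [0.5 0.5]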

    Action recognition in video using a spatial-temporal graph-based feature representation

    We propose a video-graph-based human action recognition framework. Given an input video sequence, we extract spatio-temporal local features and construct a video graph that incorporates appearance and motion constraints to reflect the spatio-temporal dependencies among the features. In particular, we extend the popular DBSCAN density-based clustering algorithm to form an intuitive video graph. During training, we estimate a linear SVM classifier using the standard Bag-of-Words method. During classification, we apply Graph-Cut optimization to find the most frequent action label in the constructed graph and assign this label to the test video sequence. The proposed approach achieves state-of-the-art performance on standard human action recognition benchmarks, namely the KTH and UCF-Sports datasets, and competitive results on the Hollywood (HOHA) dataset.
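    The video graph here is built by extending DBSCAN; the extension itself is not detailed in the abstract, but the base density-based clustering it builds on can be sketched as follows. The `eps` and `min_pts` values and the 2-D toy points are chosen only for the example, not taken from the paper.

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one cluster label per point (-1 = noise)."""
    n = len(points)
    d = np.sqrt(((points[:, None] - points[None, :]) ** 2).sum(-1))
    neighbours = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbours[i]) < min_pts:
            continue  # already assigned, or not a core point
        # Grow a new cluster outwards from core point i.
        labels[i] = cluster
        frontier = list(neighbours[i])
        while frontier:
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbours[j]) >= min_pts:
                    frontier.extend(neighbours[j])
        cluster += 1
    return labels

# Two well-separated groups of spatio-temporal feature points (toy data).
pts = np.array([[0, 0], [0, 0.1], [0.1, 0], [5, 5], [5, 5.1], [5.1, 5]])
labels = dbscan(pts, eps=0.5, min_pts=3)
```

    In the paper's setting the clustered items would be spatio-temporal features rather than 2-D points, and cluster membership would inform the graph's edge structure.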

    The impact of video transcoding parameters on event detection for surveillance systems

    The process of transcoding video, apart from being computationally intensive, can also be a rather complex procedure. The complexity refers to the choice of appropriate parameters for the transcoding engine, with the aim of decreasing video sizes, transcoding times and network bandwidth without degrading video quality beyond the threshold at which event detectors lose their accuracy. This paper explains the need for transcoding and then studies different video quality metrics. Commonly used algorithms for motion and person detection are briefly described, with emphasis on investigating the optimum transcoding configuration parameters. The analysis of the experimental results reveals that the existing video quality metrics are not suitable for automated systems, and that the detection of persons is affected by reductions in bit rate and resolution, while motion detection is more sensitive to frame rate.
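    The parameters under study (bit rate, resolution, frame rate) form a configuration grid that a study like this would sweep. A minimal sketch of generating such a sweep for a transcoding engine such as ffmpeg; the specific flag set, parameter values and file names are illustrative assumptions, since the paper's actual engine and settings are not given in the abstract.

```python
import itertools

def ffmpeg_cmd(src, dst, bitrate_kbps, fps, width, height):
    """Build an ffmpeg command line for one transcoding configuration
    (illustrative only: codec defaults and paths are placeholders)."""
    return [
        "ffmpeg", "-y", "-i", src,
        "-b:v", f"{bitrate_kbps}k",   # target video bit rate
        "-r", str(fps),               # output frame rate
        "-s", f"{width}x{height}",    # output resolution
        dst,
    ]

# Sweep the three parameters the paper studies.
bitrates = [256, 512, 1024]
fps_values = [5, 15, 25]
sizes = [(320, 240), (640, 480)]
configs = list(itertools.product(bitrates, fps_values, sizes))
print(len(configs))  # -> 18
```

    Each resulting clip would then be fed to the person and motion detectors, and detection accuracy plotted against the swept parameter.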

    SAVASA project @ TRECVID 2012: interactive surveillance event detection

    In this paper we describe our participation in the interactive surveillance event detection task at TRECVid 2012. The system we developed comprised individual classifiers brought together behind a simple video search interface that enabled users to select relevant segments based on down-sampled animated GIFs. Two types of user -- `experts' and `end users' -- performed the evaluations. Due to time constraints we focussed on three events -- ObjectPut, PersonRuns and Pointing -- and two of the five available cameras (1 and 3). Results from the interactive runs, as well as a discussion of the performance of the underlying retrospective classifiers, are presented.

    SAVASA project @ TRECVid 2013: semantic indexing and interactive surveillance event detection

    In this paper we describe our participation in the semantic indexing (SIN) and interactive surveillance event detection (SED) tasks at TRECVid 2013 [11]. Our work was motivated by the goals of the EU SAVASA project (Standards-based Approach to Video Archive Search and Analysis), which supports search over multiple video archives. Our aims were: to assess a standard object detection methodology (SIN); to evaluate contrasting runs in automatic event detection (SED); and to deploy a distributed, cloud-based search interface for the interactive component of the SED task. Results from the SIN task, the underlying retrospective classifiers for surveillance event detection, and a discussion of the contrasting aims of the SAVASA user interface compared with the TRECVid task requirements are presented.

    Action localization in video using a graph-based feature representation

    We propose a new framework for human action localization in video sequences. The option not only to detect but also to localize actions in surveillance video is crucial to improving a system's ability to manage high volumes of CCTV. In our approach, the action localization task is formulated as a maximum-path finding problem in a directed spatio-temporal video graph. The graph is constructed on top of frame-based and temporal low-level features. To localize actions, we apply a maximum-path algorithm to find the path in the graph that is considered to be the localized action in the video. The proposed approach achieves competitive performance on the J-HMDB and UCF-Sports datasets.
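    Maximum-path finding over a frame-layered video graph can be solved by dynamic programming. A simplified sketch, assuming every node in frame t may link to every node in frame t+1 and that `score` returns a per-node action likelihood; both are simplifications, since the paper's graph restricts edges using spatio-temporal features.

```python
def max_path(layers, score, link_cost=0.0):
    """Dynamic-programming maximum-score path through a frame-layered DAG.

    layers[t] lists the candidate nodes in frame t; score(node) gives each
    node's action likelihood. The best path picks one node per frame."""
    best = [score(n) for n in layers[0]]          # best score ending at each node
    back = [[None] * len(layer) for layer in layers]
    for t in range(1, len(layers)):
        prev_best = best
        best = []
        for j, n in enumerate(layers[t]):
            i = max(range(len(prev_best)), key=lambda k: prev_best[k])
            best.append(prev_best[i] + score(n) - link_cost)
            back[t][j] = i                        # remember the predecessor
    # Trace back the path of node indices, one per frame.
    j = max(range(len(best)), key=lambda k: best[k])
    path = [j]
    for t in range(len(layers) - 1, 0, -1):
        j = back[t][j]
        path.append(j)
    return path[::-1], max(best)

# Toy graph: three frames, two candidate nodes each; score is the node value.
path, value = max_path([[1, 5], [2, 9], [3, 1]], lambda x: x)
```

    The returned index path (here node 1 in frame 0, node 1 in frame 1, node 0 in frame 2, total score 17) would correspond to the sequence of per-frame regions declared to be the localized action.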